
MethodMeta validation#18724

Merged
lucylq merged 1 commit into main from security28
Apr 10, 2026

Conversation

@lucylq
Contributor

@lucylq lucylq commented Apr 6, 2026

  • Reject negative memory planning values
  • Reject negative tensor dimension sizes in calculate_nbytes()
  • Add null-pointer checks before TensorInfo::create to catch invalid tensor metadata in the flatbuffer.

This PR was authored with the assistance of Claude.

@pytorch-bot

pytorch-bot Bot commented Apr 6, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18724

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 3 New Failures, 2 Unrelated Failures

As of commit eba5644 with merge base 3d2c853:

NEW FAILURES - The following jobs have failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 6, 2026
@github-actions

github-actions Bot commented Apr 6, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@lucylq lucylq force-pushed the security28 branch 3 times, most recently from 6be51fa to c8339db on April 6, 2026 18:48
@lucylq lucylq changed the title Validate buffer sizes and tensor metadata in MethodMeta MethodMeta validation Apr 6, 2026
@lucylq lucylq marked this pull request as ready for review April 6, 2026 18:49
@lucylq lucylq requested a review from JacobSzwejbka as a code owner April 6, 2026 18:50
Copilot AI review requested due to automatic review settings April 6, 2026 18:50
Contributor

Copilot AI left a comment


Pull request overview

This PR strengthens runtime validation of serialized method metadata to fail fast on malformed programs, focusing on tensor shape sizing and memory planning buffer sizes.

Changes:

  • Add validation to reject negative tensor dimension sizes during byte-size calculation.
  • Add null checks when extracting tensor metadata for inputs/outputs.
  • Reject negative memory-planned buffer sizes and add required formatting support (<cinttypes> for PRId64).
Comments suppressed due to low confidence (3)

runtime/executor/method_meta.cpp:111

  • calculate_nbytes() can now return InvalidProgram for negative sizes, but TensorInfo::create() converts any failure into InvalidArgument and logs only a generic message. This loses the specific error kind and makes invalid programs look like caller misuse; consider returning nbytes.error() (and/or forwarding the detailed log message) instead of hardcoding InvalidArgument here.
  auto nbytes = calculate_nbytes(sizes, scalar_type);
  ET_CHECK_OR_RETURN_ERROR(
      nbytes.ok(),
      InvalidArgument,
      "Failed to calculate nbytes for TensorInfo");

runtime/executor/method_meta.cpp:211

  • Even with the new tensor_value != nullptr guard, this code still dereferences tensor_value->sizes() and tensor_value->dim_order() without checking those vectors are present. A malformed/partial flatbuffer can have these fields unset (nullptr), which would crash before TensorInfo::create() runs. Consider validating tensor_value->sizes()/dim_order() are non-null before calling ->data()/->size().
  return TensorInfo::create(
      Span<const int32_t>(
          tensor_value->sizes()->data(), tensor_value->sizes()->size()),
      Span<const uint8_t>(
          tensor_value->dim_order()->data(), tensor_value->dim_order()->size()),
      static_cast<executorch::aten::ScalarType>(tensor_value->scalar_type()),

runtime/executor/method_meta.cpp:267

  • Same null-dereference risk as inputs: tensor_value->sizes() / tensor_value->dim_order() are used without checking for nullptr. Please validate these flatbuffer vectors exist before dereferencing so malformed programs fail gracefully instead of crashing.
  return TensorInfo::create(
      Span<const int32_t>(
          tensor_value->sizes()->data(), tensor_value->sizes()->size()),
      Span<const uint8_t>(
          tensor_value->dim_order()->data(), tensor_value->dim_order()->size()),
      static_cast<executorch::aten::ScalarType>(tensor_value->scalar_type()),


Comment thread runtime/executor/method_meta.cpp Outdated
Comment on lines +63 to +68
ET_CHECK_OR_RETURN_ERROR(
sizes[i] >= 0,
InvalidProgram,
"Invalid size[%zu]: %d. Size must not be negative",
i,
sizes[i]);

Copilot AI Apr 6, 2026


Format string uses %d for int32_t (sizes[i]). In this codebase, PRId32 is used for int32_t to avoid type/format mismatches across platforms; update this (and the other %d occurrences in calculate_nbytes) to use "%" PRId32.

Comment on lines +346 to +353
int64_t size = s_plan_->non_const_buffer_sizes()->Get(index + 1);
ET_CHECK_OR_RETURN_ERROR(
size >= 0,
InvalidProgram,
"memory_planned_buffer_size(%zu) has invalid negative size: %" PRId64,
index,
size);
return size;

Copilot AI Apr 6, 2026


New validation paths (negative dimension sizes, null tensor_value, negative memory-planned buffer sizes) don’t appear to have dedicated tests. Since there are already gtests for MethodMeta/TensorInfo, please add coverage that asserts the expected Error codes (and that we don’t crash) for these invalid metadata cases.

Copilot AI review requested due to automatic review settings April 8, 2026 21:58
tensor_value->dim_order() != nullptr,
InvalidProgram,
"Null tensor metadata for input %zu",
index);
Contributor Author


Note: I'd like to move these to program_validation later. Currently, program_validation is only run when Verification::InternalConsistency is set, though.

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 7 comments.



Comment on lines +63 to +68
ET_CHECK_OR_RETURN_ERROR(
sizes[i] >= 0,
InvalidProgram,
"Invalid size[%zu]: %d. Size must not be negative",
i,
sizes[i]);

Copilot AI Apr 8, 2026


The new negative-dimension check returns Error::InvalidProgram from calculate_nbytes(), but TensorInfo::create() currently converts any calculate_nbytes() failure into Error::InvalidArgument and logs a second generic message. If callers are expected to distinguish invalid programs from invalid arguments (and to avoid double-logging), propagate the underlying error from calculate_nbytes() instead of overriding it in TensorInfo::create().

Comment on lines 194 to +198
auto input_index = s_plan_->inputs()->Get(index);
// input_index was already validated by input_tag().
auto tensor_value = s_plan_->values()->Get(input_index)->val_as_Tensor();
ET_CHECK_OR_RETURN_ERROR(
tensor_value != nullptr && tensor_value->sizes() != nullptr &&

Copilot AI Apr 8, 2026


input_tensor_meta() still dereferences s_plan_->values()->Get(input_index) before the new null checks. Flatbuffer vectors can contain null table offsets, so an invalid program could still crash here. Add a null check for the EValue pointer (and return InvalidProgram) before calling ->val_as_Tensor().

Comment thread runtime/executor/method_meta.cpp
Comment on lines 251 to +255
auto output_index = s_plan_->outputs()->Get(index);
// output_index was already validated by output_tag().
auto tensor_value = s_plan_->values()->Get(output_index)->val_as_Tensor();

ET_CHECK_OR_RETURN_ERROR(
tensor_value != nullptr && tensor_value->sizes() != nullptr &&

Copilot AI Apr 8, 2026


output_tensor_meta() still dereferences s_plan_->values()->Get(output_index) before the new null checks. If the values vector contains a null EValue offset, this will crash before we can return InvalidProgram. Add a null check for the EValue pointer prior to calling ->val_as_Tensor().

Comment thread runtime/executor/method_meta.cpp
Comment on lines +351 to +355
int64_t size = s_plan_->non_const_buffer_sizes()->Get(index + 1);
ET_CHECK_OR_RETURN_ERROR(
size >= 0,
InvalidProgram,
"memory_planned_buffer_size(%zu) has invalid negative size: %" PRId64,

Copilot AI Apr 8, 2026


These new validation paths (negative tensor sizes, null tensor metadata, and negative non_const_buffer_sizes entries) are not currently covered by tests. Please add unit tests that construct/patch an invalid ExecutionPlan and assert the expected Error codes for input/output/attribute meta access and memory_planned_buffer_size().

Copilot AI review requested due to automatic review settings April 8, 2026 22:08
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 3 comments.



Comment on lines +63 to +68
ET_CHECK_OR_RETURN_ERROR(
sizes[i] >= 0,
InvalidProgram,
"Invalid size[%zu]: %d. Size must not be negative",
i,
sizes[i]);

Copilot AI Apr 8, 2026


sizes[i] is an int32_t, but this log uses %d for printing it. That’s not guaranteed to be correct for int32_t across platforms (printf type mismatch is UB if int32_t isn’t int). Prefer using the <cinttypes> macros (e.g., PRId32) and cast to int32_t to make the formatting portable (similar to other runtime/executor code).

Comment on lines 276 to +280
  auto value = values->Get(i);
  if (value->val_type() == executorch_flatbuffer::KernelTypes::Tensor) {
    auto tensor_value = value->val_as_Tensor();
-   if (tensor_value->extra_tensor_info() != nullptr &&
+   if (tensor_value != nullptr &&
+       tensor_value->extra_tensor_info() != nullptr &&

Copilot AI Apr 8, 2026


values->Get(i) can be nullptr (other code paths explicitly validate serialization_value != nullptr before dereferencing). In both num_attributes() and attribute_tensor_meta(), value->val_type() is called without checking value, which can still crash on corrupt/partially-validated flatbuffers. Consider guarding with if (value == nullptr) continue; (or an ET_CHECK_OR_RETURN_ERROR in the Result-returning function) before accessing val_type()/val_as_Tensor().

Comment thread runtime/executor/method_meta.cpp
Copilot AI review requested due to automatic review settings April 9, 2026 18:53
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 1 out of 1 changed files in this pull request and generated 4 comments.



Comment thread runtime/executor/method_meta.cpp Outdated
Comment on lines +197 to +201
auto serialization_value = s_plan_->values()->Get(input_index);
ET_CHECK_OR_RETURN_ERROR(
serialization_value != nullptr,
InvalidProgram,
"Null value for input %zu at index %d",

Copilot AI Apr 9, 2026


The new serialization_value != nullptr check is too late to prevent crashes on malformed flatbuffers: input_tensor_meta() calls input_tag() first, and input_tag() dereferences s_plan_->values()->Get(input_index) without a null check. Consider moving the null check into input_tag() / get_tag() (and/or validating s_plan_->values() is non-null) so a null entry is reported as InvalidProgram instead of segfaulting.

Comment thread runtime/executor/method_meta.cpp Outdated
Comment on lines +261 to +265
auto serialization_value = s_plan_->values()->Get(output_index);
ET_CHECK_OR_RETURN_ERROR(
serialization_value != nullptr,
InvalidProgram,
"Null value for output %zu at index %d",

Copilot AI Apr 9, 2026


Same issue as inputs: output_tensor_meta() calls output_tag() before this null check, but output_tag() dereferences s_plan_->values()->Get(output_index) without validating the entry is non-null. To reliably catch corrupted flatbuffers, add the null check in output_tag() / get_tag() (and/or validate s_plan_->values() is non-null) before any dereference.

Comment on lines 310 to +314
  for (size_t i = 0; i < values->size(); ++i) {
    auto value = values->Get(i);
    if (value->val_type() == executorch_flatbuffer::KernelTypes::Tensor) {
      auto tensor_value = value->val_as_Tensor();
-     if (tensor_value->extra_tensor_info() != nullptr &&
+     if (tensor_value != nullptr &&

Copilot AI Apr 9, 2026


attribute_tensor_meta() also dereferences value without a null check (value->val_type()), which can segfault if ExecutionPlan.values contains null table offsets. Add values != nullptr / value != nullptr validation (or skip null entries) and return InvalidProgram for corrupted metadata.

Comment on lines +371 to +375
int64_t size = s_plan_->non_const_buffer_sizes()->Get(index + 1);
ET_CHECK_OR_RETURN_ERROR(
size >= 0,
InvalidProgram,
"memory_planned_buffer_size(%zu) has invalid negative size: %" PRId64,

Copilot AI Apr 9, 2026


Add test coverage for the new negative-size rejection in memory_planned_buffer_size(): a unit test can mutate non_const_buffer_sizes[index+1] to a negative value and verify the API returns InvalidProgram (and does not crash).

…ethodMeta

memory_planned_buffer_size() returned buffer sizes from the PTE FlatBuffer
without validation. Negative int64_t values would become huge size_t values
when used by callers for allocation, causing OOM/DoS.

- Reject negative values from memory_planned_buffer_size() with
  Error::InvalidProgram
- Reject negative tensor dimension sizes in calculate_nbytes() before
  the implicit cast to size_t, preventing wraparound to huge values
- Add null-pointer checks on tensor metadata (tensor_value, sizes,
  dim_order) in input_tensor_meta() and output_tensor_meta() to guard
  against malformed PTE files

This PR was authored with the assistance of Claude.
Contributor

@JacobSzwejbka JacobSzwejbka left a comment


Would be ideal to just front load as much value checking as possible in program creation.

@lucylq lucylq merged commit 5e8a0df into main Apr 10, 2026
163 of 170 checks passed
@lucylq lucylq deleted the security28 branch April 10, 2026 18:27
jpiat pushed a commit to jpiat/executorch that referenced this pull request Apr 14, 2026
- Reject negative memory planning values
- Reject negative tensor dimension sizes in calculate_nbytes()
- Add null-pointer checks before TensorInfo::create to catch invalid
tensor metadata in the flatbuffer.

This PR was authored with the assistance of Claude.

Co-authored-by: Github Executorch <github_executorch@arm.com>